💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion of unlearning in deep generative models. We will explore recent breakthroughs and challenges, focusing on how cutting-edge text-to-image models handle unlearning tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: The Illusion of Unlearning: The Unstable Nature of Machine Unlearning in Text-to-Image Diffusion Models


🔸 Presenter: Aryan Komaei

🌀 Abstract:
This paper tackles a critical issue in text-to-image diffusion models such as Stable Diffusion, DALL·E, and Midjourney. These models are trained on massive datasets that often contain private or copyrighted content, raising serious legal and ethical concerns. To address this, machine unlearning methods have emerged that aim to remove specific information from a trained model. However, this paper reveals a major flaw: unlearned concepts can resurface when the model is subsequently fine-tuned. The authors introduce a new framework for analyzing and evaluating the stability of current unlearning techniques, offer insights into why they often fail, and pave the way for more robust future methods.

Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 11:00 AM - 12:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️



tg-me.com/RIMLLab/213

BY RIML Lab